Llama 3.2 11B Vision Instruct
Llama 3.2-Vision is a multimodal large language model developed by Meta. It accepts both image and text inputs and supports tasks such as visual recognition, image reasoning, and image captioning.
Pipeline: Image-to-Text · Library: Transformers · Multilingual